Post-hoc explanation methods are increasingly relied upon for understanding black-box classifiers in high-stakes applications, creating a need for reliable explanations. While numerous explanation methods have been proposed, recent works have shown that many existing methods can be inconsistent or unstable. In addition, high-performing classifiers are often highly nonlinear and can exhibit complex behavior around the decision boundary, leading to brittle or misleading local explanations. Therefore, there is a pressing need to quantify the uncertainty of such explanation methods in order to understand when explanations are trustworthy. We introduce a novel uncertainty quantification method parameterized by a Gaussian Process model, which combines the uncertainty approximation of existing methods with a novel geodesic-based similarity that captures the complexity of the target black-box decision boundary. The proposed framework is highly flexible; it can be used with any black-box classifier and feature attribution method to amortize uncertainty estimates for explanations. We show theoretically that our proposed geodesic-based kernel similarity increases with the complexity of the decision boundary. Empirical results on multiple tabular and image datasets show that our decision-boundary-aware uncertainty estimate improves understanding of explanations compared with existing methods.
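As a rough illustration of the amortization idea, the sketch below fits a standard Gaussian process over feature attributions using a kernel built from a pairwise distance matrix. The geodesic, decision-boundary-aware distance from the abstract is not reproduced; plain Euclidean distances stand in for it, and all names and shapes are illustrative assumptions.

```python
# Minimal sketch (assumptions): amortizing uncertainty over feature attributions
# with a Gaussian process whose kernel is built from a precomputed distance matrix.
# The geodesic distance along the black-box decision boundary is stubbed with
# Euclidean distance here.
import numpy as np

def rbf_kernel(D, length_scale=1.0):
    """Turn a pairwise distance matrix into an RBF-style similarity."""
    return np.exp(-0.5 * (D / length_scale) ** 2)

def gp_posterior(D_train, D_cross, D_test, y, noise=1e-2, length_scale=1.0):
    """Standard GP regression posterior from precomputed distance matrices.

    D_train: (n, n) distances among training points
    D_cross: (m, n) distances from test points to training points
    D_test:  (m, m) distances among test points
    y:       (n, d) attributions from any feature-attribution method
    """
    K = rbf_kernel(D_train, length_scale) + noise * np.eye(len(y))
    K_s = rbf_kernel(D_cross, length_scale)
    K_ss = rbf_kernel(D_test, length_scale)
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ y                    # amortized attribution estimate
    cov = K_ss - K_s @ K_inv @ K_s.T          # uncertainty over the explanation
    return mean, np.sqrt(np.clip(np.diag(cov), 0, None))

# Toy usage with Euclidean distances standing in for the geodesic metric.
rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(20, 5)), rng.normal(size=(3, 5))
attributions = rng.normal(size=(20, 5))       # e.g., SHAP or IG values per feature
dist = lambda A, B: np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
mean, std = gp_posterior(dist(X_train, X_train), dist(X_test, X_train),
                         dist(X_test, X_test), attributions)
```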
Few works have shown the efficacy of unsupervised out-of-distribution (OOD) detection methods on complex medical data. Here, we present preliminary findings for our unsupervised OOD detection algorithm, SimCLR-LOF, as well as for a recent state-of-the-art approach (SSD), applied to medical images. SimCLR-LOF learns semantically meaningful features using SimCLR and uses LOF to score whether a test sample is OOD. We evaluate on the multi-source International Skin Imaging Collaboration (ISIC) 2019 dataset and show results that are competitive with SSD, as well as with recent supervised approaches applied to the same data.
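A minimal sketch of the scoring step described above, assuming embeddings from a (stubbed) SimCLR encoder are scored with scikit-learn's LocalOutlierFactor; the encoder, data shapes, and hyperparameters are placeholders, not details from the paper.

```python
# Minimal sketch (assumptions): score OOD-ness of test embeddings with LOF.
# The SimCLR encoder is stubbed; any function mapping images to feature
# vectors could be substituted.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def embed(images):
    # Placeholder for a pretrained SimCLR encoder (e.g., a frozen ResNet backbone).
    return images.reshape(len(images), -1).astype(np.float32)

rng = np.random.default_rng(0)
train_imgs = rng.normal(size=(200, 8, 8))   # in-distribution training images
test_imgs = rng.normal(size=(10, 8, 8))     # mixture of ID and OOD at test time

lof = LocalOutlierFactor(n_neighbors=20, novelty=True)  # novelty=True allows scoring new samples
lof.fit(embed(train_imgs))
ood_scores = -lof.score_samples(embed(test_imgs))       # higher = more likely OOD
```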
Much recent work in task-oriented parsing has focused on finding a middle ground between flat slots and intents, which are inexpressive but easy to annotate, and powerful representations such as the lambda calculus, which are expressive but costly to annotate. This paper continues the exploration of task-oriented parsing by introducing a new dataset for parsing pizza and drink orders, whose semantics cannot be captured by flat slots and intents. We perform an extensive evaluation of deep-learning techniques for task-oriented parsing on this dataset, including different flavors of seq2seq systems and RNNGs. The dataset comes in two main versions, one in a recently introduced utterance-level hierarchical notation that we call TOP, and one whose targets are executable representations (EXR). We demonstrate empirically that training the parser to directly generate EXR notation not only solves the problem of entity resolution in one fell swoop and overcomes a number of expressive limitations of TOP notation, but also results in significantly greater parsing accuracy.
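A purely hypothetical illustration of the distinction the abstract draws, using made-up notation: an utterance-level hierarchical parse (TOP-style) versus an executable representation (EXR-style) with resolved entities for an order such as "two large pizzas with pepperoni". Neither form reflects the dataset's actual annotation scheme.

```python
# Hypothetical illustration (assumed notation, not the dataset's):
# a TOP-style hierarchical parse keeps the utterance tokens inside labeled spans,
top_style = "(ORDER (PIZZAORDER (NUMBER two) (SIZE large) (TOPPING pepperoni)))"

# whereas an EXR-style target resolves entities to canonical values and can be
# executed directly, e.g., by constructing the ordered items.
exr_style = {"type": "PIZZAORDER", "number": 2, "size": "LARGE",
             "toppings": ["PEPPERONI"]}
```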
Explainability has been widely stated as a cornerstone of the responsible and trustworthy use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) models expanding to risk-sensitive and safety-critical domains, many methods have been proposed to explain the decisions of these models. Recent years have also seen concerted efforts that have shown how such explanations can be distorted (attacked) by minor input perturbations. While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models. In this work, we present a comprehensive survey of methods that study, understand, attack, and defend explanations of DNN models. We also present a detailed review of different metrics used to evaluate explanation methods, as well as describe attributional attack and defense methods. We conclude with lessons and take-aways for the community towards ensuring robust explanations of DNN model predictions.
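As a rough sketch of the kind of attributional attack such surveys cover, the snippet below perturbs an input so that its gradient saliency moves toward an attacker-chosen target while staying close to the original input. It is a generic illustration (using a smooth Softplus network so the saliency itself is differentiable), not any specific method from the survey.

```python
# Minimal sketch (assumptions): nudge an input within a small budget so its
# gradient saliency approaches an attacker-chosen target attribution pattern.
import torch
import torch.nn as nn

# Smooth activations so the saliency map is itself differentiable w.r.t. the input.
model = nn.Sequential(nn.Linear(10, 32), nn.Softplus(), nn.Linear(32, 3))

target_saliency = torch.zeros(10)   # attacker-chosen attribution pattern
x = torch.randn(10)
x_adv = x.clone()

for _ in range(50):
    x_adv = x_adv.detach().requires_grad_(True)
    score = model(x_adv).max()                                    # top-class logit
    grad, = torch.autograd.grad(score, x_adv, create_graph=True)  # gradient saliency
    loss = ((grad - target_saliency) ** 2).sum()                  # push saliency toward target
    step, = torch.autograd.grad(loss, x_adv)
    with torch.no_grad():
        # Keep the perturbation small so the prediction is (approximately) preserved.
        x_adv = torch.max(torch.min(x_adv - 0.01 * step, x + 0.1), x - 0.1)
```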
Finding Nash equilibrium policies for two-player differential games requires solving Hamilton-Jacobi-Isaacs (HJI) PDEs. Recent studies have had success in circumventing the curse of dimensionality when solving such PDEs by adopting self-supervised (physics-informed) neural networks as universal value approximators. This paper extends from the state of the art on zero-sum games with continuous values to general-sum games with discontinuous values, where the discontinuities are caused by the players' losses. We show that, due to the lack of convergence proofs and generalization analyses for discontinuous losses, existing self-supervised learning techniques fail to generalize and raise safety concerns in an autonomous-driving application. Our solution is to first pre-train the value network on supervised Nash equilibria and then refine it by minimizing a loss that combines the supervised data with the PDE and boundary conditions. Importantly, the demonstrated advantage of the proposed learning method over purely supervised and self-supervised approaches requires careful choice of the neural activation function: among the activations considered ($\texttt{relu}$, $\texttt{sin}$, and $\texttt{tanh}$), we show that $\texttt{tanh}$ is the only choice that achieves optimal generalization and safety performance. Our conjecture is that $\texttt{tanh}$ (similar to $\texttt{sin}$) provides continuity of the value and its gradient, which is sufficient for the convergence of learning, while also being sufficiently expressive (similar to $\texttt{relu}$) to approximate complex value landscapes. Finally, we apply our method to approximating control policies for incomplete-information interactions and demonstrate its contribution to safe interactions.
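A minimal sketch of the hybrid training objective described above, assuming a tanh value network trained on supervised Nash-equilibrium values plus a physics-informed residual term. The function hji_residual is a placeholder, since the paper's actual Hamilton-Jacobi-Isaacs residual depends on the game dynamics and losses.

```python
# Minimal sketch (assumptions): a tanh value network trained with a hybrid loss --
# supervised Nash-equilibrium values plus a generic PDE residual term.
import torch
import torch.nn as nn

value_net = nn.Sequential(nn.Linear(5, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))

def hji_residual(x, v, grad_v):
    # Placeholder Hamiltonian; the true residual depends on dynamics and losses.
    return grad_v.sum(dim=1, keepdim=True) + v

def hybrid_loss(x_sup, v_sup, x_pde, w_pde=1.0):
    sup = ((value_net(x_sup) - v_sup) ** 2).mean()        # supervised term
    x_pde = x_pde.requires_grad_(True)
    v = value_net(x_pde)
    grad_v, = torch.autograd.grad(v.sum(), x_pde, create_graph=True)
    pde = (hji_residual(x_pde, v, grad_v) ** 2).mean()    # physics-informed term
    return sup + w_pde * pde

x_sup, v_sup = torch.randn(32, 5), torch.randn(32, 1)     # supervised equilibrium data
x_pde = torch.randn(64, 5)                                 # collocation points
loss = hybrid_loss(x_sup, v_sup, x_pde)
loss.backward()
```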
Decision-making in autonomous driving is a notoriously difficult problem due to the complexity and volatility of the traffic environment. In this project, we use a deep Q-network (DQN) together with rule-based constraints to make lane-change decisions. Safe and efficient lane-change behavior can be obtained by combining high-level lateral decision-making with low-level, rule-based trajectory monitoring. After training for a total of 100 episodes, the agent is expected to perform proper lane-change maneuvers in the realistic Udacity simulator. The results show that the rule-based DQN performs better than the plain DQN approach: the rule-based DQN achieves a safety rate of 0.8 and an average speed of 47 MPH.
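A minimal sketch of how a rule-based filter might be combined with a DQN's Q-values, assuming a hypothetical minimum-gap rule that masks unsafe lane changes before the greedy action is chosen; the rule, observation fields, and thresholds are illustrative, not those used in the project.

```python
# Minimal sketch (assumptions): mask actions that violate a hand-written safety
# rule before taking the argmax over the DQN's Q-values.
import numpy as np

ACTIONS = ["keep_lane", "change_left", "change_right"]

def rule_based_mask(observation):
    """Return a boolean mask of actions allowed by the safety rules."""
    gap_left, gap_right = observation["gap_left"], observation["gap_right"]
    min_gap = 10.0  # hypothetical minimum safe gap in meters
    return np.array([True, gap_left > min_gap, gap_right > min_gap])

def select_action(q_values, observation):
    mask = rule_based_mask(observation)
    masked_q = np.where(mask, q_values, -np.inf)  # forbid unsafe lane changes
    return ACTIONS[int(np.argmax(masked_q))]

# Toy usage with Q-values from a trained network.
obs = {"gap_left": 25.0, "gap_right": 4.0}
print(select_action(np.array([0.2, 0.9, 1.5]), obs))  # -> "change_left"
```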
We introduce a self-supervised pre-training method for learning general-purpose audio representations. Our system is based on clustering: it utilizes an offline clustering step to provide target labels that act as pseudo-labels for solving a prediction task. We build on recent advances in self-supervised learning for computer vision and design a lightweight, easy-to-use self-supervised pre-training scheme. We pre-train embeddings on a balanced subset of a large audio dataset and transfer these representations to nine downstream classification tasks, including speech, music, animal sounds, and acoustic scenes. In addition, we conduct ablation studies to identify key design choices and make all of our code and pre-trained models publicly available.
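A minimal sketch of the offline-clustering idea, assuming k-means pseudo-labels computed from the current embeddings are used as targets for a prediction task; the encoder, feature dimensions, and number of clusters are placeholders rather than the paper's design choices.

```python
# Minimal sketch (assumptions): cluster current embeddings offline with k-means,
# then train the encoder and a linear head to predict the cluster assignments.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

encoder = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 64))
head = nn.Linear(64, 16)  # predicts one of 16 pseudo-classes
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

features = torch.randn(512, 40)  # stand-in for log-mel frame / clip features

for epoch in range(3):
    with torch.no_grad():        # offline clustering step
        emb = encoder(features).numpy()
    pseudo = torch.as_tensor(KMeans(n_clusters=16, n_init=10).fit_predict(emb),
                             dtype=torch.long)
    logits = head(encoder(features))                   # pseudo-label prediction task
    loss = nn.functional.cross_entropy(logits, pseudo)
    opt.zero_grad()
    loss.backward()
    opt.step()
```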
Current approaches to the automated tuning of quantum dot (QD) devices, while showing some success, lack an assessment of data reliability. This leads to unexpected failures when the autonomous system processes noisy or low-quality data. In this work, we propose a framework for robust autotuning of QD devices that combines a machine learning (ML) state classifier with a data quality control module. The data quality control module acts as a "gatekeeper" system, ensuring that only reliable data is processed by the state classifier; lower data quality results in device recalibration or termination. To train both ML systems, we enhance the QD simulation by incorporating synthetic noise typical of QD experiments. We confirm that including synthetic noise in the training of the state classifier significantly improves performance, yielding an accuracy of 95.0(9)% when tested on experimental data. We then validate the functionality of the data quality control module by showing that the state classifier's performance deteriorates with decreasing data quality, as expected. Our results establish a robust and flexible ML framework for the autonomous tuning of noisy QD devices.
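A minimal sketch of the gatekeeper flow described above, assuming a data-quality model whose score decides between classifying the device state, recalibrating, or terminating; the thresholds and both models are stand-ins, not the paper's trained systems.

```python
# Minimal sketch (assumptions): a data-quality "gatekeeper" that only passes
# reliable measurements to the state classifier; otherwise it triggers
# recalibration or termination. Both models are stubbed with simple callables.
from enum import Enum

class Action(Enum):
    CLASSIFY = "classify_state"
    RECALIBRATE = "recalibrate_device"
    TERMINATE = "terminate_tuning"

def autotune_step(scan, quality_model, state_classifier,
                  quality_threshold=0.7, min_quality=0.3):
    quality = quality_model(scan)               # data quality control module
    if quality >= quality_threshold:
        return Action.CLASSIFY, state_classifier(scan)
    if quality >= min_quality:
        return Action.RECALIBRATE, None         # retake the measurement
    return Action.TERMINATE, None               # give up on this tuning run

# Toy usage with placeholder models.
action, state = autotune_step(
    scan=[[0.1, 0.2], [0.3, 0.4]],
    quality_model=lambda s: 0.9,                # pretend the scan is clean
    state_classifier=lambda s: "double_dot",
)
```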
Skillful streamflow forecasts can inform decision-making in various areas of water policy and management. We integrate numerical weather prediction ensembles with a distributed hydrological model to generate ensemble streamflow forecasts at medium-range lead times (1-7 days). We present a case study of machine learning applications for forecast post-processing in the Susquehanna River basin in the eastern United States. For forecast verification, we use different metrics, such as skill scores and reliability diagrams, conditioned on lead time, flow threshold, and season. The verification results show that the machine learning post-processor can improve streamflow forecasts relative to low-complexity forecasts (e.g., climatology and temporal persistence), as well as to deterministic and raw ensemble forecasts. Relative to the raw ensemble, the gains over the medium range are generally larger at longer lead times than at shorter ones, for high flows than for low-to-medium flows, and during the cool season rather than the warm season. Overall, our results highlight the benefits of machine learning, in many respects, for improving both the skill and reliability of streamflow forecasts.
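As an illustration of one of the verification metrics mentioned, the sketch below computes a generic skill score of the form SS = 1 - score(forecast) / score(reference), using mean squared error against a climatological reference; the metric choice and toy numbers are assumptions, not the study's actual verification setup.

```python
# Minimal sketch (assumptions): an MSE-based skill score relative to a
# climatological reference forecast (1 = perfect, 0 = no skill over reference).
import numpy as np

def skill_score(forecast, reference, observed):
    mse = lambda f: np.mean((f - observed) ** 2)
    return 1.0 - mse(forecast) / mse(reference)

obs = np.array([120.0, 95.0, 210.0, 180.0])          # observed streamflow (e.g., m^3/s)
climatology = np.full_like(obs, obs.mean())          # low-complexity reference forecast
postprocessed = np.array([118.0, 100.0, 200.0, 175.0])
print(skill_score(postprocessed, climatology, obs))
```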
The problem of classifying high-dimensional shapes in real-world data grows in complexity as the dimension of the space increases. For the case of identifying convex shapes of different geometries, a new classification framework was recently proposed in which the intersections of a set of one-dimensional representations, called rays, with the boundaries of the shape are used to identify the specific geometry. This ray-based classification (RBC) has been empirically verified using synthetic datasets of two- and three-dimensional shapes (Zwolak et al., in Proceedings of the Third Workshop on Machine Learning and the Physical Sciences (NeurIPS 2020), Vancouver, Canada) and has recently also been validated experimentally (Zwolak et al., PRX Quantum 2:020335, 2021). Here, we establish a bound on the number of rays necessary for shape classification, defined by key angular metrics, for arbitrary convex shapes. For two dimensions, we derive a lower bound on the number of rays in terms of the shape's length, diameter, and exterior angles. For convex polytopes in $\mathbb{R}^n$, we generalize this result to a similar bound given as a function of the dihedral angles and the geometrical parameters of the polygonal faces. This result enables a different approach for estimating high-dimensional shapes using substantially fewer data elements than volumetric or surface-based approaches.
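A minimal sketch of the ray-based idea in two dimensions, assuming rays cast from an interior point with the distance to the boundary recorded along each ray as a feature vector; the paper's bounds on the required number of rays are not reproduced, and the polygon and ray count here are purely illustrative.

```python
# Minimal sketch (assumptions): cast rays from an interior point of a 2D convex
# polygon and record the distance to the boundary along each ray. The resulting
# "ray fingerprint" can then be fed to any classifier.
import numpy as np

def ray_distances(polygon, origin, n_rays=8):
    """Distances from `origin` to the boundary of a convex polygon along n_rays
    evenly spaced directions. `polygon` is an (m, 2) array of vertices in order."""
    dists = []
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        d = np.array([np.cos(theta), np.sin(theta)])
        best = np.inf
        for a, b in zip(polygon, np.roll(polygon, -1, axis=0)):
            A = np.column_stack([d, a - b])        # solve origin + t*d = a + s*(b - a)
            if abs(np.linalg.det(A)) < 1e-12:
                continue                           # ray parallel to this edge
            t, s = np.linalg.solve(A, a - origin)
            if t > 0 and 0 <= s <= 1:
                best = min(best, t)
        dists.append(best)
    return np.array(dists)

square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
print(ray_distances(square, origin=np.array([0.0, 0.0]), n_rays=4))  # ~ [1, 1, 1, 1]
```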